
    PRESERVATION FOR FUTURE GENERATIONS: DIGITAL TECHNOLOGIES, DIGITALIZATION, AND EXPERIMENTS WITH CONSUMERS AS PRODUCERS OF INDUSTRIAL HERITAGE DOCUMENTATION

    As digital documentation and recording technologies have evolved, so has the perception that they are segregated and intended primarily for use in either engineering/scientific or amateur/consumer applications. In contrast to this notion, the three-dimensionality afforded by these technologies differs only in the order of priorities: laser scanners and related image acquisition technologies document and visualize, while, inversely, consumer cameras visualize and document. This broad field of digital acquisition technologies has evolved into a heterogeneous set of tools that all capture aspects of the physical world, and the line between them is becoming blurred. Within this evolution, these tools are becoming less expensive and easier to use and, depending upon the application, can be operated successfully by individuals with modest or semi-professional skills. The proliferation of digital documentation technologies, the ease of their use, and the ability to share visual data on the internet allow us to examine the inclusion of digital documentation in the preservation management of historic industrial resources, pushing heritage toward a digitalized culture.

    Human Services Students' Preferences for Master's Level Training

    Human Services students close to graduation are seeking employment in the field, but many are also considering their future career paths and the training needed to reach their long-term career goals. Knowing whether bachelor's-level students desire graduate degrees, which focus they prefer, and how they would like to pursue the degrees may contribute to the decision-making of educators, employers, and students. This exploratory study, therefore, examined human services students' preferences for master's level training. Students' responses reflected preferences for several types of master's programs, direct acceptance, and online delivery. These themes and their implications for educators, employers, and students are discussed.

    Students' Experiences with Different Course Delivery Modalities: On Campus, Online, and Satellite

    In an effort to adapt to the technological advances of this century, the training of human services professionals has grown from traditional classrooms and satellite programs to online education. Many human services programs are under pressure from their universities and students to expand into online education. This study examined 252 students' experiences and perceptions of their Bachelor of Science program as it transitioned to offering courses online in addition to on campus and at satellite sites. Students' narrative responses reflected four themes: convenience, interactions, learning preference, and technology. These themes and their implications for educators and students are discussed.

    Hardware extensions to make lazy subscription safe

    Transactional Lock Elision (TLE) uses Hardware Transactional Memory (HTM) to execute unmodified critical sections concurrently, even if they are protected by the same lock. To ensure correctness, the transactions used to execute these critical sections "subscribe" to the lock by reading it and checking that it is available. A recent paper proposed using the tempting "lazy subscription" optimization for a similar technique in a different context, namely transactional systems that use a single global lock (SGL) to protect all transactional data. We identify several pitfalls that show that lazy subscription is not safe for TLE, because unmodified critical sections executing before subscribing to the lock may behave incorrectly in a number of subtle ways. We also show that recently proposed compiler support for modifying transaction code to ensure subscription occurs before any incorrect behavior could manifest is not sufficient to avoid all of the pitfalls we identify. We further argue that extending such compiler support to avoid all pitfalls would add substantial complexity and would usually limit the extent to which subscription can be deferred, undermining the effectiveness of the optimization. Hardware extensions suggested in the recent proposal also do not address all of the pitfalls we identify. In this extended version of our WTTM 2014 paper, we describe hardware extensions that make lazy subscription safe, both for SGL-based transactional systems and for TLE, without the need for special compiler support. We also explain how nontransactional loads can be exploited, if available, to further enhance the effectiveness of lazy subscription.
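
    As a point of reference for the subscription mechanism described above, the following is a minimal sketch of TLE with conventional eager subscription, written against Intel's RTM intrinsics. It illustrates the baseline technique the paper analyzes, not the paper's proposed hardware extensions, and its retry policy is deliberately simplified.

        // Minimal sketch of TLE with conventional *eager* subscription,
        // using Intel RTM intrinsics (compile with -mrtm on x86).
        // Illustrative only; does not model the paper's hardware extensions.
        #include <immintrin.h>   // _xbegin, _xend, _xabort, _XBEGIN_STARTED
        #include <atomic>

        static std::atomic<int> lock_word{0};   // 0 = free, 1 = held

        template <typename CriticalSection>
        void tle_execute(CriticalSection cs) {
            if (_xbegin() == _XBEGIN_STARTED) {
                // Eager subscription: read the lock inside the transaction
                // and abort if it is held. The lock enters the read set, so
                // a concurrent real acquisition aborts this transaction.
                if (lock_word.load(std::memory_order_relaxed) != 0)
                    _xabort(0xff);
                cs();            // run the unmodified critical section
                _xend();         // commit
            } else {
                // Fallback: acquire the lock for real (a production TLE
                // would retry the transactional path a few times first).
                while (lock_word.exchange(1)) { /* spin */ }
                cs();
                lock_word.store(0);
            }
        }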

    SCAMP: standardised, concentrated, additional macronutrients, parenteral nutrition in very preterm infants: a phase IV randomised, controlled exploratory study of macronutrient intake, growth and other aspects of neonatal care

    Background: Infants born at <29 weeks' gestation are at high risk of neurocognitive disability. Early postnatal growth failure, particularly of head growth, is an important and potentially reversible risk factor for impaired neurodevelopmental outcome. Inadequate nutrition is a major factor in this postnatal growth failure; optimal protein and calorie (macronutrient) intakes are rarely achieved, especially in the first week. Infants <29 weeks are dependent on parenteral nutrition for the bulk of their nutrient needs for the first 2-3 weeks of life to allow gut adaptation to milk digestion. The prescription, formulation, and administration of neonatal parenteral nutrition are critical to achieving optimal protein and calorie intake but have received little scientific evaluation. Current neonatal parenteral nutrition regimens often rely on individualised prescription to manage the labile, unpredictable biochemical and metabolic control characteristic of the early neonatal period, yet individualised prescription frequently fails to translate into optimal macronutrient delivery. We have previously shown that a standardised, concentrated neonatal parenteral nutrition regimen can optimise macronutrient intake.

    Methods: We propose a single-centre, randomised controlled exploratory trial of two standardised, concentrated neonatal parenteral nutrition regimens, comparing a standard macronutrient content (maximum protein 2.8 g/kg/day; lipid 2.8 g/kg/day; dextrose 10%) with a higher macronutrient content (maximum protein 3.8 g/kg/day; lipid 3.8 g/kg/day; dextrose 12%) over the first 28 days of life. 150 infants of 24-28 completed weeks' gestation and birthweight <1200 g will be recruited. The primary outcome will be head growth velocity in the first 28 days of life. Secondary outcomes will include: a) auxological data between birth and 36 weeks corrected gestational age; b) actual macronutrient intake in the first 28 days; c) biomarkers of biochemical and metabolic tolerance; d) infection biomarkers and other intravascular line complications; e) incidence of major complications of prematurity, including mortality; f) neurodevelopmental outcome at 2 years corrected gestational age.

    Trial registration: Current Controlled Trials ISRCTN76597892 (http://www.controlled-trials.com/ISRCTN76597892); EudraCT Number: 2008-008899-14.
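
    As a rough illustration of the dose arithmetic the two regimens imply, the sketch below multiplies the quoted maximum protein and lipid ceilings by a body weight; the weight used is an assumption for illustration, not a trial parameter.

        // Dose arithmetic for the two SCAMP regimens' maximum protein and
        // lipid ceilings (2.8 vs. 3.8 g/kg/day, from the abstract). The
        // example body weight is an assumption, not a trial parameter.
        #include <cstdio>

        struct Regimen {
            const char* name;
            double max_protein_g_per_kg_day;
            double max_lipid_g_per_kg_day;
        };

        int main() {
            const Regimen regimens[] = {
                { "standard", 2.8, 2.8 },
                { "higher",   3.8, 3.8 },
            };
            const double weight_kg = 1.0;   // illustrative very-preterm weight

            for (const Regimen& r : regimens)
                std::printf("%s: up to %.1f g protein, %.1f g lipid per day\n",
                            r.name,
                            r.max_protein_g_per_kg_day * weight_kg,
                            r.max_lipid_g_per_kg_day * weight_kg);
        }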

    Segmentation of corpus callosum using diffusion tensor imaging: validation in patients with glioblastoma

    Background: This paper presents a three-dimensional (3D) method for segmenting the corpus callosum in normal subjects and in brain cancer patients with glioblastoma.

    Methods: Nineteen patients with histologically confirmed, treatment-naïve glioblastoma and eleven normal control subjects underwent DTI on a 3T scanner. Based on the information inherent in diffusion tensors, a similarity measure was proposed and used in the proposed algorithm, in which the diffusion pattern of the corpus callosum served as prior information. Subsequently, the corpus callosum was automatically divided into Witelson subdivisions. We simulated the potential rotation of the corpus callosum under tumor pressure and studied the reproducibility of the proposed segmentation method in such cases.

    Results: Dice coefficients, estimated to compare automatic and manual segmentation results for the Witelson subdivisions, ranged from 94% to 98% for control subjects and from 81% to 95% for tumor patients, illustrating the closeness of automatic and manual segmentations. Studying the effect of corpus callosum rotation by different Euler angles showed that, although segmentation results were more sensitive to azimuth and elevation than to skew, rotations caused by brain tumors do not have major effects on the segmentation results.

    Conclusions: The proposed method and similarity measure segment the corpus callosum by propagating a hyper-surface inside the structure (resulting in high sensitivity) without penetrating into neighboring fiber bundles (resulting in high specificity).
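
    For reference, the Dice coefficient quoted in the results can be computed over two binary segmentation masks as in the generic sketch below; the function and variable names are illustrative and not taken from the paper.

        // Generic Dice overlap between two same-sized binary masks, the
        // measure quoted in the results. Not code from the paper.
        #include <cstddef>
        #include <vector>

        double dice(const std::vector<bool>& a, const std::vector<bool>& b) {
            std::size_t inter = 0, size_a = 0, size_b = 0;
            for (std::size_t i = 0; i < a.size(); ++i) {   // assumes a.size() == b.size()
                size_a += a[i];
                size_b += b[i];
                inter  += a[i] && b[i];
            }
            // Dice = 2|A intersect B| / (|A| + |B|); 1.0 means perfect agreement.
            return (size_a + size_b) ? 2.0 * inter / (size_a + size_b) : 1.0;
        }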

    Kernel regression estimation of fiber orientation mixtures in Diffusion MRI

    We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III.
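
    To convey the estimator family involved, the sketch below implements a toy scalar Nadaraya-Watson kernel regression; the paper's method operates on model-valued fiber-orientation data with directional divergence measures, so this is only an analogy for the underlying kernel-weighted averaging.

        // Toy scalar Nadaraya-Watson kernel regression: estimate f(x0) as a
        // kernel-weighted average of observed samples. The paper's estimator
        // works on fiber-orientation mixtures; this only conveys the idea.
        #include <cmath>
        #include <vector>

        double gaussian_kernel(double d, double h) {
            return std::exp(-0.5 * (d / h) * (d / h));
        }

        double nw_estimate(const std::vector<double>& x,
                           const std::vector<double>& y,
                           double x0, double bandwidth) {
            double num = 0.0, den = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i) {
                const double w = gaussian_kernel(x[i] - x0, bandwidth);
                num += w * y[i];   // nearby samples get larger weights
                den += w;
            }
            return den > 0.0 ? num / den : 0.0;
        }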

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy, and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
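
    The flux-versus-accumulation distinction can be made concrete with a toy turnover calculation that compares an autophagosome marker measured with and without a lysosomal blocker; the variable names and values below are purely illustrative assumptions, not methods or data from the guidelines.

        // Toy LC3-II turnover arithmetic behind "flux vs. accumulation":
        // compare a marker level with and without a lysosomal blocker.
        // All names and values are illustrative assumptions.
        #include <cstdio>

        int main() {
            const double lc3ii_untreated = 1.0;  // normalized densitometry, no blocker
            const double lc3ii_blocked   = 1.2;  // same condition + lysosomal blocker

            // If blocking degradation barely raises the marker, existing
            // autophagosomes were accumulating because downstream delivery or
            // degradation is impaired, not because more autophagy is induced.
            const double flux = lc3ii_blocked - lc3ii_untreated;
            std::printf("approximate flux: %.2f\n", flux);
        }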